An important aspect of Human–Robot Interaction is responding to different kinds of touch stimuli. To date, several technologies have been explored to determine how a touch is perceived by a social robot, usually by placing a large number of sensors throughout the robot's shell. In this work, we introduce a novel approach in which the audio acquired from contact microphones located in the robot's shell is processed using machine learning techniques to distinguish between different types of touch. The system is able to determine when the robot is touched (touch detection) and to ascertain the kind of touch performed among a set of possibilities: stroke, tap, slap, and tickle (touch classification). This proposal is cost-effective, since a single microphone is enough to cover each solid part of the robot, so just a few microphones can cover the whole shell. It is also easy to install and configure, as it only requires attaching the microphone to a contact surface on the robot's shell and plugging it into the robot's computer. Results show high accuracy in touch gesture recognition: in the testing phase, Logistic Model Trees achieved the best performance, with an F-score of 0.81. The dataset was built with information from 25 participants performing a total of 1981 touch gestures.
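The touch-detection step described above can be illustrated with a minimal sketch: segment the contact-microphone signal into frames and flag frames whose energy exceeds a threshold as candidate touch events. The frame size and threshold below are hypothetical placeholders, not values from the paper, and the paper's actual classifier (Logistic Model Trees) is not reproduced here.

```python
import math

def rms(frame):
    """Root-mean-square energy of one audio frame."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def detect_touch(samples, frame_size=256, threshold=0.05):
    """Return indices of frames whose RMS energy exceeds the threshold,
    i.e. candidate touch events (frame_size and threshold are illustrative)."""
    frames = [samples[i:i + frame_size]
              for i in range(0, len(samples) - frame_size + 1, frame_size)]
    return [i for i, f in enumerate(frames) if rms(f) > threshold]

# Synthetic example: silence, a short burst (a "tap"), then silence.
signal = [0.0] * 512 + [0.3, -0.3] * 128 + [0.0] * 512
print(detect_touch(signal))  # → [2]: only the burst frame exceeds the threshold
```

In a full system, the frames flagged here would be passed to a trained classifier that assigns one of the gesture labels (stroke, tap, slap, tickle) based on features extracted from the audio.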